120 research outputs found

    What are the predictors of change in multimorbidity among people with HIV? A longitudinal observational cohort study

    Introduction: Multimorbidity is common among people living with HIV (PLWH), with numerous cross-sectional studies demonstrating associations with older age and past immunosuppression. Little is known about the progression of multimorbidity, particularly in the setting of long-term access to antiretrovirals. This study aims to determine factors predictive of change in multimorbidity in PLWH. Methods: People living with HIV who attended a regional HIV service were recruited to a consented observational cohort between September 2016 and March 2020. Demographic data, laboratory results and a Cumulative Illness Rating Scale (CIRS) score were collected at enrolment and at the first clinical review of each subsequent year. Change in CIRS score was calculated from enrolment to February 2021. Associations with change were determined through univariate and multivariate linear regression. Results: Of 253 people, the median age was 58.9 years [interquartile range (IQR): 51.9–64.4], 91.3% were male, and HIV had been diagnosed a median of 22.16 years (IQR: 12.1–30.9) beforehand. Time in the study was a median of 134 weeks (IQR: 89.0–179.0), over which a mean CIRS score change of 1.21 (SD 2.60) was observed. Being older (p < 0.001), having a higher body mass index (p = 0.008) and having diabetes (p = 0.014) were associated with an increased likelihood of worsening multimorbidity. PLWH with a higher level of multimorbidity at baseline were less likely to worsen over time (p < 0.001). Conclusion: As diabetes and weight predict worsening multimorbidity, routine diabetes screening, body mass index measurement, and awareness of multimorbidity status are recommended.
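
    A minimal sketch of the regression step described above, assuming a hypothetical data frame whose columns (age, bmi, diabetes, baseline_cirs, cirs_change) are placeholders rather than the study's actual dataset:

```python
# Hedged sketch of the multivariable linear regression described above.
# The DataFrame and its columns (age, bmi, diabetes, baseline_cirs,
# cirs_change) are hypothetical placeholders, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

def fit_cirs_change_model(df: pd.DataFrame):
    """Regress change in CIRS score on candidate baseline predictors."""
    model = smf.ols("cirs_change ~ age + bmi + diabetes + baseline_cirs", data=df)
    return model.fit()
```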

    Synthesizing benchmarks for predictive modeling

    Predictive modeling using machine learning is an effective method for building compiler heuristics, but there is a shortage of benchmarks. Typical machine learning experiments outside of the compilation field train over thousands or millions of examples. In machine learning for compilers, however, there are typically only a few dozen common benchmarks available. This limits the quality of learned models, as they have very sparse training data for what are often high-dimensional feature spaces. What is needed is a way to generate an unbounded number of training programs that finely cover the feature space. At the same time, the generated programs must be similar to the types of programs that human developers actually write, otherwise the learning will target the wrong parts of the feature space. We mine open source repositories for program fragments and apply deep learning techniques to automatically construct models for how humans write programs. We sample these models to generate an unbounded number of runnable training programs. The quality of the programs is such that even human developers struggle to distinguish our generated programs from hand-written code. We use our generator for OpenCL programs, CLgen, to automatically synthesize thousands of programs and show that learning over these improves the performance of a state-of-the-art predictive model by 1.27×. In addition, the fine covering of the feature space automatically exposes weaknesses in the feature design which are invisible with the sparse training examples from existing benchmark suites. Correcting these weaknesses further increases performance by 4.30×.
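
    As an illustration of the sample-a-learned-model idea, the sketch below uses a simple character-level Markov chain as a stand-in for the deep learning model the abstract describes; the tiny seed corpus is invented for the example and is not CLgen's actual model or training data:

```python
# Illustrative stand-in for sampling a learned model of how humans write
# code: a character-level Markov chain trained on mined code fragments.
# CLgen itself uses a deep learning model; this is only a sketch.
import random
from collections import defaultdict

def train(corpus: str, order: int = 4) -> dict:
    """Learn next-character distributions from the mined fragments."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        model[corpus[i:i + order]].append(corpus[i + order])
    return model

def sample(model: dict, seed: str, order: int = 4, length: int = 120) -> str:
    """Sample the model to generate a new, human-like code fragment."""
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break
        out += random.choice(choices)
    return out

corpus = "__kernel void add(__global float* a, __global float* b) { a[0] += b[0]; }"
print(sample(train(corpus), corpus[:4]))
```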

    Function Merging by Sequence Alignment

    Resource-constrained devices for embedded systems are becoming increasingly important. In such systems, memory is highly restrictive, making code size in most cases even more important than performance. Compared to more traditional platforms, memory is a larger part of the cost and code occupies much of it. Despite that, compilers make little effort to reduce code size. One key technique attempts to merge the bodies of similar functions. However, production compilers only apply this optimization to identical functions, while research compilers improve on that by merging the few functions with identical control-flow graphs and signatures. Overall, existing solutions are insufficient and we end up having to either increase cost by adding more memory or remove functionality from programs. We introduce a novel technique that can merge arbitrary functions through sequence alignment, a bioinformatics algorithm for identifying regions of similarity between sequences. We combine this technique with an intelligent exploration mechanism to direct the search towards the most promising function pairs. Our approach is more than 2.4× better than the state-of-the-art, reducing code size by up to 25%, with an overall average of 6%, while introducing an average compilation-time overhead of only 15%. When aided by profiling information, this optimization can be deployed without any significant impact on the performance of the generated code.
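
    The core step can be pictured with a short Needleman-Wunsch alignment over two functions' opcode sequences; the scoring values and opcode lists below are illustrative assumptions, not the paper's actual cost model:

```python
# Hedged sketch: align two functions' instruction sequences (lists of
# opcode strings) with Needleman-Wunsch, so matching regions can be
# merged and differing regions guarded. Scores are illustrative only.
def align(a, b, match=2, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    # Traceback: reconstruct the aligned pair of sequences ("-" marks a gap).
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                score[i][j] == score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)):
            out_a.append(a[i-1]); out_b.append(b[j-1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            out_a.append(a[i-1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j-1]); j -= 1
    return list(reversed(out_a)), list(reversed(out_b))

print(align(["load", "add", "store", "ret"], ["load", "mul", "store", "ret"]))
```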

    Temporal trends of time to antiretroviral treatment initiation, interruption and modification: Examination of patients diagnosed with advanced HIV in Australia

    INTRODUCTION: HIV prevention strategies are moving towards reducing plasma HIV RNA viral load in all HIV-positive persons, including those who are undiagnosed, treatment naïve, or on or off antiretroviral therapy. A proxy population for those undiagnosed is patients who present late to care with advanced HIV. The objectives of this analysis are to examine factors associated with patients presenting with advanced HIV, and to establish rates of treatment interruption and modification after initiating ART. METHODS: We deterministically linked records from the Australian HIV Observational Database to the Australian National HIV Registry to obtain information related to HIV diagnosis. Logistic regression was used to identify factors associated with advanced HIV diagnosis. We used survival methods to evaluate rates of ART initiation by diagnosis CD4 count strata and by calendar year of HIV diagnosis. Cox models were used to determine the hazard of first ART treatment interruption (duration >30 days) and time to first major ART modification. RESULTS: Factors associated (p<0.05) with increased odds of advanced HIV diagnosis were sex, older age, heterosexual mode of HIV exposure, being born overseas, and a rural-regional care setting. Earlier initiation of ART occurred at higher rates in later periods (2007-2012) in all diagnosis CD4 count groups. We found an 83% (69-91%) reduction in the hazard of first treatment interruption comparing 2007-2012 versus 1996-2001 (p<0.001), and no difference in ART modification for patients diagnosed with advanced HIV. CONCLUSIONS: Recent HIV diagnoses are initiating therapy earlier in all diagnosis CD4 cell count groups, potentially lowering community viral load compared to earlier time periods. We found a marked reduction in the hazard of first treatment interruption, and no difference in rates of major modification to ART by HIV presentation status in recent periods.
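
    A hedged sketch of the Cox proportional hazards step, using the lifelines package; the column names are hypothetical placeholders, not the actual cohort variables:

```python
# Hedged sketch of a Cox model for time to first treatment interruption
# (>30 days off ART). Column names are hypothetical placeholders, not the
# Australian HIV Observational Database variables.
import pandas as pd
from lifelines import CoxPHFitter

def fit_interruption_model(df: pd.DataFrame) -> CoxPHFitter:
    """Fit a Cox model with numeric covariates, including a 0/1 indicator
    for diagnosis in the later calendar period (2007-2012)."""
    cols = ["years_to_interruption", "interrupted",
            "age_at_diagnosis", "cd4_at_diagnosis", "diagnosed_2007_2012"]
    cph = CoxPHFitter()
    cph.fit(df[cols],
            duration_col="years_to_interruption",
            event_col="interrupted")
    return cph
```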

    HyFM: Function Merging for Free

    Function merging is an important optimization for reducing code size. It merges multiple functions into a single one, eliminating duplicate code among them. The existing state-of-the-art relies on a well-known sequence alignment algorithm to identify duplicate code across whole functions. However, this algorithm is quadratic in time and space on the number of instructions. This leads to very high time overheads and prohibitive levels of memory usage even for medium-sized benchmarks. For larger programs, it becomes impractical. This is made worse by an overly eager merging approach. All selected pairs of functions will be merged. Only then will this approach estimate the potential benefit from merging and decide whether to replace the original functions with the merged one. Given that most pairs are unprofitable, a significant amount of time is wasted producing merged functions that are simply thrown away. In this paper, we propose HyFM, a novel function merging technique that delivers similar levels of code size reduction for significantly lower time overhead and memory usage. Unlike the state-of-the-art, our alignment strategy works at the block level. Since basic blocks are usually much shorter than functions, even a quadratic alignment is acceptable. However, we also propose a linear algorithm for aligning blocks of the same size at a much lower cost. We extend this strategy with a multi-tier profitability analysis that bails out early from unprofitable merging attempts. By aligning individual pairs of blocks, we are able to decide their alignment's profitability separately and before actually generating code. Experimental results on SPEC 2006 and 2017 show that HyFM needs orders of magnitude less memory, using up to 48 MB or 5.6 MB, depending on the variant used, while the state-of-the-art requires 32 GB in the worst case. HyFM also runs over 4.5× faster, while still achieving comparable code size reduction. Combined with the speedup of later compilation stages due to the reduced number of functions, HyFM contributes to a reduced end-to-end compilation time.
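
    The block-level idea can be sketched as follows; the cost model, thresholds, and data representation (functions as lists of basic blocks, blocks as lists of instruction strings) are illustrative assumptions rather than HyFM's actual implementation:

```python
# Sketch of block-level merging with a linear pass for equal-size blocks
# and an early bail-out. The "profit" model and thresholds are invented
# for illustration; they are not HyFM's actual profitability analysis.
def linear_align(block_a, block_b):
    """Linear-time alignment of equal-size blocks: pair instructions by position."""
    return [(x, y, x == y) for x, y in zip(block_a, block_b)]

def block_profit(alignment):
    """Matched instructions are emitted once; mismatches need select/guard code."""
    matched = sum(1 for _, _, same in alignment if same)
    return matched - 2 * (len(alignment) - matched)

def try_merge(func_a, func_b, bail_out=-4, min_profit=1):
    """Align blocks pairwise and bail out before any merged code is generated."""
    if len(func_a) != len(func_b):
        return None  # a real implementation would also pair up differing CFGs
    total = 0
    for block_a, block_b in zip(func_a, func_b):
        if len(block_a) != len(block_b):
            return None  # HyFM would fall back to a quadratic block alignment here
        total += block_profit(linear_align(block_a, block_b))
        if total < bail_out:
            return None  # multi-tier analysis: stop early on unprofitable pairs
    return total if total >= min_profit else None

f1 = [["load", "add", "store"], ["cmp", "br"]]
f2 = [["load", "mul", "store"], ["cmp", "br"]]
print(try_merge(f1, f2))
```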

    Effect of Changes in Body Mass Index on the Risk of Cardiovascular Disease and Diabetes Mellitus in HIV-Positive Individuals: Results From the D:A:D Study

    BACKGROUND: Weight gain is common among people with HIV once antiretroviral treatment (ART) is commenced. We assess the effect of changes in body mass index (BMI), from different baseline BMI levels, on the risk of cardiovascular disease (CVD) and diabetes mellitus (DM). METHODS: D:A:D participants receiving ART were followed from their first BMI measurement to the first CVD or DM event, or to the earliest of 1/2/2016 or 6 months after their last follow-up. Participants were stratified according to their baseline BMI, and changes from baseline BMI were calculated for each participant. Poisson regression models were used to assess the effects of changes in BMI on CVD or DM events. RESULTS: There were 2,104 CVD and 1,583 DM events over 365,287 and 354,898 person-years, respectively (rates per 1,000 person-years: CVD 5.8, 95% CI 5.5-6.0; DM 4.5, 95% CI 4.2-4.7). Participants were largely male (74%), with a mean baseline age of 40 years and a median baseline BMI of 23.0 (IQR: 21.0-25.3). Analysis of CVD risk by change in BMI from baseline, stratified by baseline BMI, showed little evidence of an increased risk of CVD with an increased BMI in any baseline BMI stratum. An increase in BMI was associated with an increased risk of DM across all baseline BMI strata. CONCLUSIONS: While increases in BMI across all levels of baseline BMI were not associated with an increased risk of CVD, such changes were consistently associated with an increased risk of DM. There was also some evidence of an increased risk of CVD with a decrease in BMI.
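
    A hedged sketch of the Poisson regression described above, with person-years of follow-up as the exposure offset; the formula and column names are hypothetical, not the D:A:D analysis code:

```python
# Hedged sketch of a Poisson regression of event counts against change in
# BMI within baseline BMI strata, with person-years as the exposure offset.
# Column names are hypothetical placeholders, not the D:A:D dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_event_rate_model(df: pd.DataFrame, outcome: str = "cvd_events"):
    """Model event rates, letting the BMI-change effect differ by baseline stratum."""
    model = smf.glm(
        f"{outcome} ~ bmi_change * C(baseline_bmi_stratum)",
        data=df,
        family=sm.families.Poisson(),
        offset=np.log(df["person_years"]),
    )
    return model.fit()
```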

    Lipid Profiles in HIV-Infected Patients Receiving Combination Antiretroviral Therapy: Are Different Antiretroviral Drugs Associated with Different Lipid Profiles?

    Levels of triglycerides (TG), total cholesterol (TC), low-density lipoprotein cholesterol (LDL-c), and high-density lipoprotein cholesterol (HDL-c), as well as the TC:HDL-c ratio, were compared in patients receiving different antiretroviral therapy regimens. Patients receiving first-line regimens including protease inhibitors (PIs) had higher TC and TG levels and TC:HDL-c ratios than did antiretroviral-naive patients; patients receiving 2 PIs had higher levels of each lipid. Ritonavir-containing regimens were associated with higher TC and TG levels and TC:HDL-c ratios than were indinavir-containing regimens; however, receipt of nelfinavir was associated with a reduced risk of lower HDL-c levels, and receipt of saquinavir was associated with lower TC:HDL-c ratios. Patients receiving nonnucleoside reverse-transcriptase inhibitors had higher levels of TC and LDL-c than did antiretroviral-naive patients, although the risk of having lower HDL-c levels was lower than that in patients receiving a single PI. Efavirenz was associated with higher levels of TC and TG than was nevirapine.

    Whirlpool: Improving Dynamic Cache Management with Static Data Classification

    Cache hierarchies are increasingly non-uniform and difficult to manage. Several techniques, such as scratchpads or reuse hints, use static information about how programs access data to manage the memory hierarchy. Static techniques are effective on regular programs, but because they set fixed policies, they are vulnerable to changes in program behavior or available cache space. Instead, most systems rely on dynamic caching policies that adapt to observed program behavior. Unfortunately, dynamic policies spend significant resources trying to learn how programs use memory, and yet they often perform worse than a static policy. We present Whirlpool, a novel approach that combines static information with dynamic policies to reap the benefits of each. Whirlpool statically classifies data into pools based on how the program uses memory. Whirlpool then uses dynamic policies to tune the cache to each pool. Hence, rather than setting policies statically, Whirlpool uses static analysis to guide dynamic policies. We present both an API that lets programmers specify pools manually and a profiling tool that discovers pools automatically in unmodified binaries. We evaluate Whirlpool on a state-of-the-art NUCA cache. Whirlpool significantly outperforms prior approaches: on sequential programs, Whirlpool improves performance by up to 38% and reduces data movement energy by up to 53%; on parallel programs, Whirlpool improves performance by up to 67% and reduces data movement energy by up to 2.6×.
    Funding: National Science Foundation (U.S.) grants CCF-1318384 and CAREER-1452994; Samsung GRO award.
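
    To make the pools-plus-dynamic-policy idea concrete, the sketch below classifies data into named pools and lets a simple dynamic policy shift cache capacity toward the pool with more recent misses; the names and the policy are invented for illustration and are not Whirlpool's actual interface:

```python
# Toy illustration of "static classification, dynamic policy": data is
# statically tagged with a pool, and a dynamic policy re-partitions cache
# capacity between pools based on observed misses. All names and the
# policy itself are invented; this is not Whirlpool's real API.
class Pool:
    def __init__(self, name: str, capacity: int):
        self.name = name
        self.capacity = capacity   # cache lines currently assigned to this pool
        self.misses = 0            # misses observed since the last repartition

def repartition(pools, step: int = 1):
    """Move capacity from the pool with fewest misses to the one with most."""
    loser = min(pools, key=lambda p: p.misses)
    winner = max(pools, key=lambda p: p.misses)
    if loser is not winner and loser.capacity > step:
        loser.capacity -= step
        winner.capacity += step
    for p in pools:
        p.misses = 0  # start a new observation epoch

# Static classification: the programmer (or a profiler) assigns data to pools.
pools = [Pool("tree_nodes", capacity=512), Pool("hash_table", capacity=512)]
pools[0].misses, pools[1].misses = 40, 350   # hypothetical epoch counters
repartition(pools)
print([(p.name, p.capacity) for p in pools])
```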